Press and Publications
A selection of coverage of our work on interpretability
News and Presentations

Interpretable AI for Industrial Solutions at the MIT Paris Symposium
Research Scientist Maxime Amram presenting the broad applicability of Interpretable AI solutions in industrial settings at the MIT Paris Symposium.

Book Awarded the Frederick W. Lanchester Prize
The textbook by co-founders Dimitris Bertsimas and Jack Dunn, Machine Learning Under a Modern Optimization Lens, is awarded the Frederick W. Lanchester Prize.

Interpretable Predictive Maintenance for Hard Drives
Webinar by co-founder Daisy Zhuo discussing Interpretable AI's paper on interpretable predictive maintenance, hosted by storage provider Backblaze.

How Interpretable AI Uses MIP to Develop More Accurate Machine Learning Models
Webinar by co-founders Professor Dimitris Bertsimas and Dr Daisy Zhuo in collaboration with Gurobi Optimization.

Computing for the Future: Setting New Directions
Co-Founder Professor Dimitris Bertsimas speaking on interpretable machine learning methods at the launch of the MIT College of Computing.

Interpretable AI and its Applications in Financial Services
Co-Founder Professor Dimitris Bertsimas at the 2019 MIT Citi Conference in New York.

Bringing Interpretability to Machine Learning and Artificial Intelligence
Co-Founder Dr Daisy Zhuo at the 2019 MIT Europe Conference in Vienna.

A New Generation of Machine Learning Methods
Co-Founder Dr Jack Dunn at the 2018 MIT Research and Development Conference in Boston.
Machine Learning Under a Modern Optimization Lens
Our algorithms form the core of the recent graduate-level textbook Machine Learning Under a Modern Optimization Lens by co-founders Bertsimas and Dunn. The book details the transformative effect modern optimization is having on the fields of machine learning and artificial intelligence, and guides teaching at leading universities such as MIT.
In 2021 the book received the Frederick W. Lanchester Prize, which recognizes the best contribution to operations research and the management sciences published in the preceding five years.

Selected Methodological Papers
A selection of the academic papers pioneering our algorithms
Optimal Classification Trees
Dimitris Bertsimas and Jack Dunn
Machine Learning, 2017
The original publication by the co-founders pioneering Optimal Trees. The paper develops the first scalable mixed-integer optimization formulation for training optimal decision trees, and presents empirical results showing that such trees outperform classical methods such as CART.
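To make the exact-versus-greedy distinction concrete, here is a minimal brute-force sketch in Python. It is purely illustrative and is not the paper's mixed-integer formulation: it enumerates every depth-2 tree on a toy XOR dataset, where no single split reduces impurity, so the tree must be evaluated as a whole to reach zero training error.

```python
import numpy as np

def leaf_error(y):
    # Misclassifications if this node is a leaf predicting the majority class.
    return 0 if len(y) == 0 else len(y) - np.bincount(y).max()

def best_depth1_error(X, y):
    # Exact depth-1 tree: enumerate every (feature, threshold) split.
    best = leaf_error(y)  # allow "no split"
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            best = min(best, leaf_error(y[left]) + leaf_error(y[~left]))
    return best

def best_depth2_error(X, y):
    # Exact depth-2 tree: for every root split, solve both children exactly.
    best = leaf_error(y)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            best = min(best, best_depth1_error(X[left], y[left])
                           + best_depth1_error(X[~left], y[~left]))
    return best

# XOR-style data: no root split improves class purity, so a greedy impurity
# criterion has no signal here, yet the exact search finds a zero-error tree.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
print(best_depth2_error(X, y))  # -> 0
```

The paper's mixed-integer formulation replaces this exponential enumeration with a single optimization problem that modern solvers can attack at practical scale.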
Optimal Prescriptive Trees
Dimitris Bertsimas, Jack Dunn, and Nishanth Mundru
INFORMS Journal on Optimization, 2019
This paper extends the optimal trees optimization framework to the field of prescriptive decision making. The resulting Optimal Prescriptive Trees learn how to prescribe directly from observational data, and perform competitively with the best black-box methods for the same task.
Optimal Survival Trees
Dimitris Bertsimas, Jack Dunn, Emma Gibson, and Agni Orfanoudaki
Machine Learning, 2021
The optimal trees optimization framework is extended to the task of survival analysis. Optimal Survival Trees learn factors that affect survival over a continuous time period, with direct applications to healthcare and predictive maintenance.
Optimal Policy Trees
Maxime Amram, Jack Dunn, and Ying Daisy Zhuo
Machine Learning, under review
Optimal Policy Trees combine methods from the causal inference literature with the global optimality of the Optimal Trees framework. The resulting method yields interpretable prescription policies, is highly scalable, handles both discrete and continuous treatments, and has shown superior performance in multiple experiments.
Sparse high-dimensional regression: Exact scalable algorithms and phase transitions
Dimitris Bertsimas and Bart Van Parys
The Annals of Statistics, 2020
The original publication by our co-founder pioneering Optimal Feature Selection. This paper presents the first scalable approach to exact subset selection in the linear regression problem, and demonstrates superior empirical results compared to existing heuristic approaches.
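For concreteness, the sketch below states the exact subset-selection problem in its naive brute-force form, enumerating every support of size k. This is illustrative only: the toy data and recovery claim are assumptions of the example, and the point of the paper is precisely that its algorithms avoid this exponential enumeration.

```python
from itertools import combinations
import numpy as np

def best_subset(X, y, k):
    # Enumerate every support of size k and keep the least-squares fit with
    # the smallest error: argmin over |S| = k of min_beta ||y - X_S beta||^2.
    best_err, best_support = np.inf, None
    for S in combinations(range(X.shape[1]), k):
        cols = list(S)
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        err = np.sum((y - X[:, cols] @ beta) ** 2)
        if err < best_err:
            best_err, best_support = err, S
    return best_support

# Toy demo (assumed data): two true features out of ten, modest noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = 3 * X[:, 2] - 2 * X[:, 7] + 0.1 * rng.standard_normal(100)
print(best_subset(X, y, k=2))  # expected to recover the true support (2, 7)
```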
Sparse Regression: Scalable algorithms and empirical performance
Dimitris Bertsimas, Jean Pauphilet, and Bart Van Parys
Statistical Science, 2020
This paper develops an extremely scalable heuristic for solving the Optimal Feature Selection problem that is significantly faster than the original approach, without sacrificing performance.
Sparse classification and phase transitions: A discrete optimization perspective
Dimitris Bertsimas, Jean Pauphilet, and Bart Van Parys
Preprint available on arXiv
The Optimal Feature Selection methodology is extended to classification problems, namely logistic regression and support vector machines. Experiments demonstrate superior performance compared to alternative methods.
From predictive methods to missing data imputation: An optimization approach
Dimitris Bertsimas, Colin Pawlowski, and Daisy Zhuo
Journal of Machine Learning Research, 2017
The original publication by the co-founders pioneering Optimal Imputation. The paper formulates the missing data imputation problem as a joint optimization problem and presents a scalable method to solve it to optimality, establishing superior performance to the state of the art.
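A minimal alternating-minimization sketch of the joint-optimization framing, assuming a simple k-nearest-neighbours objective: the missing entries are treated as decision variables and repeatedly re-optimized against the current completion. It mirrors the structure of the formulation but is not the paper's algorithm, and k, the iteration count, and the demo data are arbitrary.

```python
import numpy as np

def alternating_knn_impute(X, k=5, n_iters=10):
    # Warm start the missing entries with column means, then alternate between
    # (a) computing neighbourhoods on the current completion and
    # (b) re-optimizing each missing entry against its k nearest neighbours.
    X = X.copy()
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = col_means[np.where(missing)[1]]
    for _ in range(n_iters):
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        np.fill_diagonal(dist, np.inf)
        neighbours = np.argsort(dist, axis=1)[:, :k]
        for i, j in zip(*np.where(missing)):
            X[i, j] = X[neighbours[i], j].mean()
    return X

# Toy demo: hide 10% of the entries of a random matrix, then impute.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
X[rng.random(X.shape) < 0.1] = np.nan
X_imputed = alternating_knn_impute(X)
```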
Interpretable Matrix Completion: A Discrete Optimization Approach
Dimitris Bertsimas and Michael Li
Preprint available on arXiv
The original publication by the co-founders pioneering Interpretable Matrix Completion. The paper uses mixed-integer optimization to formulate the problem of creating an interpretable factorization of a matrix with side information, leading to simple and intuitive recommendation systems.
Fast Exact Matrix Completion: A Unified Optimization Framework for Matrix Completion
Dimitris Bertsimas and Michael Li
Journal of Machine Learning Research, 2020
An extension of Interpretable Matrix Completion to develop a fast and scalable stochastic algorithm for solving the matrix completion problem both with and without side information.
The Voice of Optimization
Dimitris Bertsimas and Bartolomeo Stellato
Machine Learning, 2021
The paper uses Optimal Classification Trees to understand and generalize the logic behind the optimal solutions to continuous and mixed-integer optimization problems, producing solutions in real time far faster than traditional approaches with very little sacrifice in optimality.
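A toy sketch of the offline/online split the paper proposes: offline, solve many sampled instances and record which solution (the "strategy") was optimal for each parameter vector; online, a tree predicts the strategy directly from the parameters, so no solver call is needed. Here scikit-learn's greedy tree stands in for the paper's Optimal Classification Trees, and the three-candidate "solver" is a made-up stand-in.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
candidates = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # feasible points

def solve(theta):
    # Stand-in for a real solver: index of the candidate minimizing theta . x.
    return int(np.argmin(candidates @ theta))

# Offline phase: sample parameters and record the optimal strategy for each.
thetas = rng.standard_normal((500, 2))
strategies = [solve(t) for t in thetas]
clf = DecisionTreeClassifier(max_depth=3).fit(thetas, strategies)

# Online phase: predict the strategy and recover a solution without a solver.
theta_new = np.array([[0.2, -1.3]])
print(candidates[clf.predict(theta_new)[0]], candidates[solve(theta_new[0])])
```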
Sparse Regression over Clusters: SparClur
Dimitris Bertsimas, Jack Dunn, Lea Kapelevich, and Rebecca Zhang
Optimization Letters, 2021
A sparse version of Optimal Regression Trees with linear predictions, in which the regression features used across all leaves are drawn from a common set under a global sparsity constraint. This leads to more interpretable models with competitive performance.